On variance reduction for stochastic smooth convex optimization with multiplicative noise
Authors
Abstract
Similar articles
Variance Reduction for Faster Non-Convex Optimization
We consider the fundamental problem in non-convex optimization of efficiently reaching a stationary point. In contrast to the convex case, in the long history of this basic problem, the only known theoretical results on first-order non-convex optimization remain full gradient descent, which converges in O(1/ε) iterations for smooth objectives, and stochastic gradient descent, which converges ...
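For context, "reaching a stationary point" in the non-convex setting means driving the gradient norm below a tolerance ε, rather than closing an optimality gap. A minimal Python sketch of full gradient descent with this stopping rule; the objective, step size, and tolerance below are illustrative assumptions, not taken from the paper:

    import numpy as np

    def f(x):
        # Illustrative smooth non-convex objective (not from the paper).
        return np.sum(np.cos(x)) + 0.1 * np.sum(x ** 2)

    def grad_f(x):
        return -np.sin(x) + 0.2 * x

    x = np.full(5, 2.0)
    for t in range(1000):
        g = grad_f(x)
        if g @ g <= 1e-6:          # eps-stationarity: ||grad f(x)||^2 <= eps
            break
        x -= 0.1 * g               # full gradient descent step
    print(t, np.linalg.norm(grad_f(x)))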
Continuous dependence on coefficients for stochastic evolution equations with multiplicative Lévy noise and monotone nonlinearity
Semilinear stochastic evolution equations with multiplicative Lévy noise are considered. The drift term is assumed to be monotone, nonlinear, and of linear growth. Unlike other similar works, we do not impose coercivity conditions on the coefficients. We establish the continuous dependence of the mild solution on the initial conditions and also on the coefficients. As corollaries of ...
Online Variance Reduction for Stochastic Optimization
Modern stochastic optimization methods often rely on uniform sampling, which is agnostic to the underlying characteristics of the data. This can degrade convergence by yielding estimates with high variance. A possible remedy is to employ non-uniform importance sampling techniques, which take the structure of the dataset into account. In this work, we investigate a recently pr...
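The key requirement for such non-uniform sampling is that the gradient estimate stay unbiased: draw index i with probability p_i and reweight the sampled term by 1/(n p_i). A minimal NumPy sketch, where sampling proportionally to per-example gradient norms is an illustrative choice, not the scheme investigated in the paper:

    import numpy as np

    def importance_sampled_grad(grads, p, rng):
        # Draw i with probability p[i] and reweight by 1/(n*p[i]);
        # then E[g] = (1/n) * sum_i grads[i], i.e. the estimate is unbiased.
        n = len(grads)
        i = rng.choice(n, p=p)
        return grads[i] / (n * p[i])

    rng = np.random.default_rng(0)
    grads = rng.normal(size=(100, 5))        # stand-in per-example gradients
    norms = np.linalg.norm(grads, axis=1)
    p = norms / norms.sum()                  # sample proportionally to norm
    est = np.mean([importance_sampled_grad(grads, p, rng)
                   for _ in range(10_000)], axis=0)
    print(np.allclose(est, grads.mean(axis=0), atol=0.05))   # ~ true mean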
Variance Reduction for Stochastic Gradient Optimization
Stochastic gradient optimization is a class of widely used algorithms for training machine learning models. To optimize an objective, it uses the noisy gradient computed from the random data samples instead of the true gradient computed from the entire dataset. However, when the variance of the noisy gradient is large, the algorithm might spend much time bouncing around, leading to slower conve...
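One standard remedy, and the idea behind control-variate methods of this kind, is to subtract from the noisy gradient a correlated quantity whose expectation is known: the mean of the estimate is unchanged while its variance shrinks. A minimal sketch on illustrative synthetic data; this is the generic construction, not the paper's specific control variate:

    import numpy as np

    rng = np.random.default_rng(0)
    grads = rng.normal(loc=2.0, size=(100, 5))   # per-example gradients
    # Control variate h: strongly correlated with grads, with a mean we can
    # afford to compute (e.g. gradients at a stale reference point).
    h = grads + 0.1 * rng.normal(size=grads.shape)
    h_mean = h.mean(axis=0)

    i = rng.integers(100, size=10_000)
    plain = grads[i]                      # plain stochastic gradient samples
    cv = grads[i] - h[i] + h_mean         # same expectation, smaller variance
    print(plain.var(axis=0).mean(), cv.var(axis=0).mean())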
Stochastic Variance Reduction for Nonconvex Optimization
We study nonconvex finite-sum problems and analyze stochastic variance reduced gradient (SVRG) methods for them. SVRG and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent (SGD), but their theoretical analysis almost exclusively assumes convexity. In contrast, we prove non-asymptotic rates of convergence (to stationary...
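For reference, the SVRG estimator mentioned above periodically computes a full gradient at a snapshot point and uses it to correct each stochastic gradient. A minimal NumPy sketch on an illustrative least-squares finite sum; the step size, epoch length, and problem are assumptions, not the paper's setup:

    import numpy as np

    def svrg(grad_i, x0, n, step, epochs, m, seed=0):
        # Each epoch: recompute the full gradient at a snapshot x_snap, then
        # take m inner steps with the variance-reduced estimator
        #   g = grad_i(x) - grad_i(x_snap) + full_grad(x_snap).
        x, rng = x0.copy(), np.random.default_rng(seed)
        for _ in range(epochs):
            x_snap = x.copy()
            full_grad = np.mean([grad_i(x_snap, i) for i in range(n)], axis=0)
            for _ in range(m):
                i = rng.integers(n)
                x -= step * (grad_i(x, i) - grad_i(x_snap, i) + full_grad)
        return x

    # Illustrative finite sum: f(x) = (1/n) * sum_i 0.5 * (a_i . x - b_i)^2
    rng = np.random.default_rng(1)
    A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
    grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
    x = svrg(grad_i, np.zeros(5), n=100, step=0.05, epochs=20, m=100)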
Journal
Journal title: Mathematical Programming
Year: 2018
ISSN: 0025-5610, 1436-4646
DOI: 10.1007/s10107-018-1297-x